MULTIVARIATE FORECASTING WITH RECURRENT NEURAL NETWORKS

FILENAME: Analysis Notebook.ipynb

PROJECT: Multivariate Financial Forecasting

DATE CREATED:24-APR-20

DATE UPDATED:24-APR-20

TASK: Develop and implement a recurrent neural network

PURPOSE: Given a multivariate dataset, forecast and predict the corresponding response value for each record

INTENT: The purpose of this project is to conduct exploratory analysis of the provided data set and apply both supervised and unsupervised algorithms in order to extract meaniningful information in support for future open source analysis. The project is broken down into two separate projects, with each project having four (4) distinct phases:

PROJECT: Randomized Budget Data

  1. Environment Setup

  2. Data ETL

  3. Data Exploration

  4. Model Development

Create randomm arrays to store the test values:

  1. YEAR +3: yr3_forecast

  2. YEAR +2: yr2_forecast

  3. YEAR +1: yr1_forecast

  4. YEAR +0: plan

  5. YEAR -1: approp

  6. YEAR -2: obligate

PROJECT APPLICATION: BUDGET ANALYSIS

In [1]:
from IPython.display import Image
from IPython.core.display import HTML 
Image(filename = "data/rnn.png", width=750, height=750)

PHASE 1: PROJECT SETUP

In [2]:
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))

Import the necessary libraries needed for ETL, engineering, and export efforts

In [3]:
import pandas as pd
import csv
import random
import sqlite3
import itertools
import numpy as np
import datetime
import time as t
import getpass as gp

Visualization libraries

In [46]:
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import plotly.graph_objects as go
import geopandas as gpd 
import descartes
from shapely.geometry import Point, Polygon

Import the required ML & neural net libraries

In [6]:
from scipy import stats

import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense
from tensorflow.python.keras.wrappers.scikit_learn import KerasRegressor

Declare a function used to return a randomized pandas dataframe

In [7]:
def init_array(df_length):
    '''
    DESCRIPTION: A function to create and return a two_dimensional array with randomized float values
    '''
    length = df_length
    yr3_forecast= np.random.randint(low = 100000, high = 30000000, size = df_length) 
    
    yr2_random = np.random.uniform(low=0.5, high=1.3, size=df_length)
    yr2_forecast = np.round(yr3_forecast * yr2_random,2)
    
    yr1_random = np.random.uniform(low=0.8, high=1.2, size=df_length)
    yr1_forecast = np.round(yr2_forecast * yr1_random,2)
    
    plan_random = np.random.uniform(low=0.6, high=1.3, size=df_length)
    plan_val = np.round(yr1_forecast * plan_random,2)
    
    approp_random = np.random.uniform(low=0.6, high=1.2, size=df_length)
    approp_val = np.round(plan_val * approp_random,2)
    
    oblig_random = np.random.uniform(low=0.8, high=1.0, size=df_length)
    oblig_val = np.round(approp_val * oblig_random,2)
    
    raw_df = pd.DataFrame(columns=['yr+3_forecast','yr+2_forecast','yr+1_forecast','yr0_plan','yr-1_approp','yr-2_oblig'])

    raw_df['yr+3_forecast'] = yr3_forecast
    raw_df['yr+2_forecast'] = yr2_forecast
    raw_df['yr+1_forecast'] = yr1_forecast
    raw_df['yr0_plan'] = plan_val
    raw_df['yr-1_approp'] = approp_val
    raw_df['yr-2_oblig'] = oblig_val
    
    return raw_df

Start the project timer

In [8]:
program_start = t.time()

Set the random seed for the project in order to ensure consistent results

In [9]:
random.seed(6)
username = gp.getpass(prompt='Enter your username:') password = gp.getpass(prompt='Enter your password:') username = "username" password = "password" url = 'https://sample_aws.html' # AWS instance

PHASE 2: DATA ETL

Create random arrays to store the randomized values:

  1. YEAR +3: yr3_forecast
  2. YEAR +2: yr2_forecast
  3. YEAR +1: yr1_forecast
  4. YEAR +0: plan
  5. YEAR -1: approp
  6. YEAR -2: obligate

Create the training array

In [10]:
train_df = init_array(10000)
train_df.tail(10)
Out[10]:
yr+3_forecast yr+2_forecast yr+1_forecast yr0_plan yr-1_approp yr-2_oblig
9990 20434813 23563177.39 26904969.76 23729709.61 25491608.41 21108926.27
9991 27926037 16809441.59 18915058.68 24201447.16 26644127.52 25208022.98
9992 1618575 1752175.61 1840834.72 2211070.55 2067014.11 2024265.77
9993 18880817 23920526.03 19568245.88 22835220.87 15294889.18 13856159.08
9994 26816212 14758303.91 12775583.39 12471248.83 12349823.11 10133775.74
9995 21386305 24832547.84 26948283.57 19102341.53 22348948.45 18389518.70
9996 24165454 18039970.18 17160483.10 22048590.48 19602984.11 18543620.66
9997 4028116 3456266.95 3549303.83 4284947.72 2702130.60 2556707.94
9998 9224388 11086050.70 10945949.79 13968858.21 8910733.23 8115235.68
9999 23854326 23214614.76 19180958.90 17920546.15 18490738.00 15698502.71
In [11]:
train_df.head(5)
Out[11]:
yr+3_forecast yr+2_forecast yr+1_forecast yr0_plan yr-1_approp yr-2_oblig
0 24368663 12641489.99 10454372.44 6503970.24 5371238.51 4543618.35
1 7174880 8181455.30 7888794.45 5943731.35 5308784.39 5060317.67
2 14546264 15906164.03 15645023.26 15675633.19 18452099.33 14935718.16
3 20929842 23670880.26 22147074.31 14970528.59 11089615.96 9895958.69
4 3500592 1940481.73 2252709.23 2511892.90 2956334.65 2408033.50

Create a list of column names which can be used to loop over for future analysis

In [12]:
col_list = list(train_df.columns)
col_list
Out[12]:
['yr+3_forecast',
 'yr+2_forecast',
 'yr+1_forecast',
 'yr0_plan',
 'yr-1_approp',
 'yr-2_oblig']

PHASE 3: DATA EXPLORATION

Create a copy of train_df dataframe

In [13]:
dataset = train_df.copy()
dataset.tail(10)
Out[13]:
yr+3_forecast yr+2_forecast yr+1_forecast yr0_plan yr-1_approp yr-2_oblig
9990 20434813 23563177.39 26904969.76 23729709.61 25491608.41 21108926.27
9991 27926037 16809441.59 18915058.68 24201447.16 26644127.52 25208022.98
9992 1618575 1752175.61 1840834.72 2211070.55 2067014.11 2024265.77
9993 18880817 23920526.03 19568245.88 22835220.87 15294889.18 13856159.08
9994 26816212 14758303.91 12775583.39 12471248.83 12349823.11 10133775.74
9995 21386305 24832547.84 26948283.57 19102341.53 22348948.45 18389518.70
9996 24165454 18039970.18 17160483.10 22048590.48 19602984.11 18543620.66
9997 4028116 3456266.95 3549303.83 4284947.72 2702130.60 2556707.94
9998 9224388 11086050.70 10945949.79 13968858.21 8910733.23 8115235.68
9999 23854326 23214614.76 19180958.90 17920546.15 18490738.00 15698502.71

Plot a box and whisker plot for all of the six variables

In [14]:
import plotly.graph_objects as go
import numpy as np
np.random.seed(1)

y3 = dataset['yr+3_forecast']
y2 = dataset['yr+2_forecast']
y1 = dataset['yr+1_forecast']
plan = dataset['yr0_plan']
approp = dataset['yr-1_approp']
oblig = dataset['yr-2_oblig']

fig = go.Figure()

fig.add_trace(go.Box(x=y2, name = "yr+3_forecast"))
fig.add_trace(go.Box(x=y3, name = "yr+2_forecast"))
fig.add_trace(go.Box(x=y1, name = "yr+1_forecast"))
fig.add_trace(go.Box(x=plan, name = "yr0_plan"))
fig.add_trace(go.Box(x=approp, name = "yr-1_approp"))
fig.add_trace(go.Box(x=oblig, name = "yr-2_oblig"))
fig.show()

Invoke seaborn's pairplot function in order to find any immediate correlation and statistical outliers

In [15]:
budget_pair = train_df[[
 'yr+3_forecast',
 'yr+2_forecast',
 'yr+1_forecast',
 'yr0_plan',
 'yr-1_approp',
 'yr-2_oblig']]
sns.set(style="ticks", color_codes=True)
sns.pairplot(budget_pair)
Out[15]:
<seaborn.axisgrid.PairGrid at 0x1a3a0e8310>

PHASE 4: MODEL DEVELOPMENT

Convert dataframe to numpy arrays

In [16]:
x=dataset.iloc[:, 0:5].to_numpy()
y=dataset.iloc[:,5].to_numpy()
x
Out[16]:
array([[24368663.  , 12641489.99, 10454372.44,  6503970.24,  5371238.51],
       [ 7174880.  ,  8181455.3 ,  7888794.45,  5943731.35,  5308784.39],
       [14546264.  , 15906164.03, 15645023.26, 15675633.19, 18452099.33],
       ...,
       [ 4028116.  ,  3456266.95,  3549303.83,  4284947.72,  2702130.6 ],
       [ 9224388.  , 11086050.7 , 10945949.79, 13968858.21,  8910733.23],
       [23854326.  , 23214614.76, 19180958.9 , 17920546.15, 18490738.  ]])

Reshape the response variable

In [17]:
y=np.reshape(y, (-1,1))
y
Out[17]:
array([[ 4543618.35],
       [ 5060317.67],
       [14935718.16],
       ...,
       [ 2556707.94],
       [ 8115235.68],
       [15698502.71]])

Scale the data from 0 -> 1

In [18]:
scaler_x = MinMaxScaler()
scaler_y = MinMaxScaler()

print(scaler_x.fit(x))
xscale=scaler_x.transform(x)

print(scaler_y.fit(y))
yscale=scaler_y.transform(y)
MinMaxScaler(copy=True, feature_range=(0, 1))
MinMaxScaler(copy=True, feature_range=(0, 1))

Segregate master data to 'train', 'test', 'split'

In [19]:
X_train, X_test, y_train, y_test = train_test_split(xscale, yscale)

Verify the array shape

In [20]:
X_train.shape
Out[20]:
(7500, 5)

y_train is the response variable

In [21]:
y_train.shape
Out[21]:
(7500, 1)

Build the recurrent neural network (RNN)

One input layer for the predictor variables, two hidden layers, and one output node

In [22]:
model = Sequential()
model.add(Dense(10, input_dim=5, kernel_initializer='normal', activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1, activation='linear'))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 10)                60        
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 55        
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 6         
=================================================================
Total params: 121
Trainable params: 121
Non-trainable params: 0
_________________________________________________________________
In [23]:
model.compile(loss='mse', optimizer='adam', metrics=['mse','mae'])

Fit the RNN model to our data

In [24]:
history = model.fit(X_train, y_train, epochs=250, batch_size=50,  verbose=1, validation_split=0.2)
Train on 6000 samples, validate on 1500 samples
Epoch 1/250
6000/6000 [==============================] - 0s 73us/sample - loss: 0.0103 - mse: 0.0103 - mae: 0.0672 - val_loss: 0.0023 - val_mse: 0.0023 - val_mae: 0.0335
Epoch 2/250
6000/6000 [==============================] - 0s 18us/sample - loss: 0.0015 - mse: 0.0015 - mae: 0.0272 - val_loss: 0.0011 - val_mse: 0.0011 - val_mae: 0.0237
Epoch 3/250
6000/6000 [==============================] - 0s 18us/sample - loss: 7.8679e-04 - mse: 7.8679e-04 - mae: 0.0199 - val_loss: 6.1711e-04 - val_mse: 6.1711e-04 - val_mae: 0.0173
Epoch 4/250
6000/6000 [==============================] - 0s 18us/sample - loss: 5.0949e-04 - mse: 5.0949e-04 - mae: 0.0157 - val_loss: 4.5743e-04 - val_mse: 4.5743e-04 - val_mae: 0.0148
Epoch 5/250
6000/6000 [==============================] - 0s 19us/sample - loss: 4.1729e-04 - mse: 4.1729e-04 - mae: 0.0141 - val_loss: 4.0104e-04 - val_mse: 4.0104e-04 - val_mae: 0.0136
Epoch 6/250
6000/6000 [==============================] - 0s 19us/sample - loss: 3.5990e-04 - mse: 3.5990e-04 - mae: 0.0131 - val_loss: 3.6941e-04 - val_mse: 3.6941e-04 - val_mae: 0.0132
Epoch 7/250
6000/6000 [==============================] - 0s 23us/sample - loss: 3.1938e-04 - mse: 3.1938e-04 - mae: 0.0123 - val_loss: 3.1328e-04 - val_mse: 3.1328e-04 - val_mae: 0.0123
Epoch 8/250
6000/6000 [==============================] - 0s 24us/sample - loss: 2.9293e-04 - mse: 2.9293e-04 - mae: 0.0119 - val_loss: 2.7835e-04 - val_mse: 2.7835e-04 - val_mae: 0.0114
Epoch 9/250
6000/6000 [==============================] - 0s 25us/sample - loss: 2.7221e-04 - mse: 2.7221e-04 - mae: 0.0115 - val_loss: 2.5816e-04 - val_mse: 2.5816e-04 - val_mae: 0.0109
Epoch 10/250
6000/6000 [==============================] - 0s 24us/sample - loss: 2.5187e-04 - mse: 2.5187e-04 - mae: 0.0110 - val_loss: 2.4732e-04 - val_mse: 2.4732e-04 - val_mae: 0.0109
Epoch 11/250
6000/6000 [==============================] - 0s 24us/sample - loss: 2.4171e-04 - mse: 2.4171e-04 - mae: 0.0108 - val_loss: 2.4490e-04 - val_mse: 2.4490e-04 - val_mae: 0.0107
Epoch 12/250
6000/6000 [==============================] - 0s 20us/sample - loss: 2.3985e-04 - mse: 2.3985e-04 - mae: 0.0108 - val_loss: 2.2427e-04 - val_mse: 2.2427e-04 - val_mae: 0.0103
Epoch 13/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2941e-04 - mse: 2.2941e-04 - mae: 0.0106 - val_loss: 2.2470e-04 - val_mse: 2.2470e-04 - val_mae: 0.0104
Epoch 14/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2969e-04 - mse: 2.2969e-04 - mae: 0.0106 - val_loss: 2.2341e-04 - val_mse: 2.2341e-04 - val_mae: 0.0103
Epoch 15/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2342e-04 - mse: 2.2342e-04 - mae: 0.0104 - val_loss: 2.1539e-04 - val_mse: 2.1539e-04 - val_mae: 0.0101
Epoch 16/250
6000/6000 [==============================] - 0s 20us/sample - loss: 2.2881e-04 - mse: 2.2881e-04 - mae: 0.0106 - val_loss: 2.1738e-04 - val_mse: 2.1738e-04 - val_mae: 0.0102
Epoch 17/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2474e-04 - mse: 2.2474e-04 - mae: 0.0105 - val_loss: 2.1258e-04 - val_mse: 2.1258e-04 - val_mae: 0.0101
Epoch 18/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2671e-04 - mse: 2.2671e-04 - mae: 0.0105 - val_loss: 2.1279e-04 - val_mse: 2.1279e-04 - val_mae: 0.0101
Epoch 19/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2287e-04 - mse: 2.2287e-04 - mae: 0.0104 - val_loss: 2.1393e-04 - val_mse: 2.1393e-04 - val_mae: 0.0101
Epoch 20/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2763e-04 - mse: 2.2763e-04 - mae: 0.0106 - val_loss: 2.1735e-04 - val_mse: 2.1735e-04 - val_mae: 0.0101
Epoch 21/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2432e-04 - mse: 2.2432e-04 - mae: 0.0104 - val_loss: 2.1273e-04 - val_mse: 2.1273e-04 - val_mae: 0.0101
Epoch 22/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2353e-04 - mse: 2.2353e-04 - mae: 0.0104 - val_loss: 2.1341e-04 - val_mse: 2.1341e-04 - val_mae: 0.0103
Epoch 23/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2375e-04 - mse: 2.2375e-04 - mae: 0.0105 - val_loss: 2.1786e-04 - val_mse: 2.1786e-04 - val_mae: 0.0103
Epoch 24/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3090e-04 - mse: 2.3090e-04 - mae: 0.0107 - val_loss: 2.1316e-04 - val_mse: 2.1316e-04 - val_mae: 0.0102
Epoch 25/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2825e-04 - mse: 2.2825e-04 - mae: 0.0105 - val_loss: 2.3932e-04 - val_mse: 2.3932e-04 - val_mae: 0.0111
Epoch 26/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3173e-04 - mse: 2.3173e-04 - mae: 0.0107 - val_loss: 2.1400e-04 - val_mse: 2.1400e-04 - val_mae: 0.0102
Epoch 27/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2219e-04 - mse: 2.2219e-04 - mae: 0.0104 - val_loss: 2.3151e-04 - val_mse: 2.3151e-04 - val_mae: 0.0104
Epoch 28/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2379e-04 - mse: 2.2379e-04 - mae: 0.0104 - val_loss: 2.1543e-04 - val_mse: 2.1543e-04 - val_mae: 0.0105
Epoch 29/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2744e-04 - mse: 2.2744e-04 - mae: 0.0106 - val_loss: 2.2129e-04 - val_mse: 2.2129e-04 - val_mae: 0.0102
Epoch 30/250
6000/6000 [==============================] - 0s 24us/sample - loss: 2.2376e-04 - mse: 2.2376e-04 - mae: 0.0104 - val_loss: 2.1310e-04 - val_mse: 2.1310e-04 - val_mae: 0.0100
Epoch 31/250
6000/6000 [==============================] - 0s 26us/sample - loss: 2.3036e-04 - mse: 2.3036e-04 - mae: 0.0105 - val_loss: 2.3638e-04 - val_mse: 2.3638e-04 - val_mae: 0.0111
Epoch 32/250
6000/6000 [==============================] - 0s 24us/sample - loss: 2.3402e-04 - mse: 2.3402e-04 - mae: 0.0108 - val_loss: 2.1269e-04 - val_mse: 2.1269e-04 - val_mae: 0.0101
Epoch 33/250
6000/6000 [==============================] - 0s 25us/sample - loss: 2.2685e-04 - mse: 2.2685e-04 - mae: 0.0105 - val_loss: 2.1166e-04 - val_mse: 2.1166e-04 - val_mae: 0.0101
Epoch 34/250
6000/6000 [==============================] - 0s 23us/sample - loss: 2.3308e-04 - mse: 2.3308e-04 - mae: 0.0107 - val_loss: 2.1519e-04 - val_mse: 2.1519e-04 - val_mae: 0.0101
Epoch 35/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3102e-04 - mse: 2.3102e-04 - mae: 0.0106 - val_loss: 2.2157e-04 - val_mse: 2.2157e-04 - val_mae: 0.0106
Epoch 36/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2602e-04 - mse: 2.2602e-04 - mae: 0.0105 - val_loss: 2.2300e-04 - val_mse: 2.2300e-04 - val_mae: 0.0107
Epoch 37/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3011e-04 - mse: 2.3011e-04 - mae: 0.0105 - val_loss: 2.6094e-04 - val_mse: 2.6094e-04 - val_mae: 0.0114
Epoch 38/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2915e-04 - mse: 2.2915e-04 - mae: 0.0105 - val_loss: 2.5449e-04 - val_mse: 2.5449e-04 - val_mae: 0.0115
Epoch 39/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2972e-04 - mse: 2.2972e-04 - mae: 0.0106 - val_loss: 2.1930e-04 - val_mse: 2.1930e-04 - val_mae: 0.0102
Epoch 40/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2294e-04 - mse: 2.2294e-04 - mae: 0.0104 - val_loss: 2.1959e-04 - val_mse: 2.1959e-04 - val_mae: 0.0102
Epoch 41/250
6000/6000 [==============================] - 0s 17us/sample - loss: 2.3402e-04 - mse: 2.3402e-04 - mae: 0.0107 - val_loss: 2.1294e-04 - val_mse: 2.1294e-04 - val_mae: 0.0101
Epoch 42/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.4712e-04 - mse: 2.4712e-04 - mae: 0.0110 - val_loss: 2.1072e-04 - val_mse: 2.1072e-04 - val_mae: 0.0100
Epoch 43/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2659e-04 - mse: 2.2659e-04 - mae: 0.0105 - val_loss: 2.3800e-04 - val_mse: 2.3800e-04 - val_mae: 0.0109
Epoch 44/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2534e-04 - mse: 2.2534e-04 - mae: 0.0104 - val_loss: 2.1243e-04 - val_mse: 2.1243e-04 - val_mae: 0.0100
Epoch 45/250
6000/6000 [==============================] - 0s 20us/sample - loss: 2.2458e-04 - mse: 2.2458e-04 - mae: 0.0104 - val_loss: 2.2063e-04 - val_mse: 2.2063e-04 - val_mae: 0.0104
Epoch 46/250
6000/6000 [==============================] - 0s 26us/sample - loss: 2.2682e-04 - mse: 2.2682e-04 - mae: 0.0105 - val_loss: 2.1144e-04 - val_mse: 2.1144e-04 - val_mae: 0.0101
Epoch 47/250
6000/6000 [==============================] - 0s 21us/sample - loss: 2.2784e-04 - mse: 2.2784e-04 - mae: 0.0105 - val_loss: 2.4948e-04 - val_mse: 2.4948e-04 - val_mae: 0.0115
Epoch 48/250
6000/6000 [==============================] - 0s 21us/sample - loss: 2.2503e-04 - mse: 2.2503e-04 - mae: 0.0104 - val_loss: 2.1265e-04 - val_mse: 2.1265e-04 - val_mae: 0.0100
Epoch 49/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2813e-04 - mse: 2.2813e-04 - mae: 0.0106 - val_loss: 2.1170e-04 - val_mse: 2.1170e-04 - val_mae: 0.0101
Epoch 50/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3199e-04 - mse: 2.3199e-04 - mae: 0.0106 - val_loss: 2.1145e-04 - val_mse: 2.1145e-04 - val_mae: 0.0100
Epoch 51/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2897e-04 - mse: 2.2897e-04 - mae: 0.0106 - val_loss: 2.3941e-04 - val_mse: 2.3941e-04 - val_mae: 0.0115
Epoch 52/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2276e-04 - mse: 2.2276e-04 - mae: 0.0104 - val_loss: 2.6622e-04 - val_mse: 2.6622e-04 - val_mae: 0.0113
Epoch 53/250
6000/6000 [==============================] - 0s 23us/sample - loss: 2.3519e-04 - mse: 2.3519e-04 - mae: 0.0108 - val_loss: 2.1626e-04 - val_mse: 2.1626e-04 - val_mae: 0.0103
Epoch 54/250
6000/6000 [==============================] - 0s 26us/sample - loss: 2.4580e-04 - mse: 2.4580e-04 - mae: 0.0110 - val_loss: 2.5930e-04 - val_mse: 2.5930e-04 - val_mae: 0.0117
Epoch 55/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2681e-04 - mse: 2.2681e-04 - mae: 0.0105 - val_loss: 2.1079e-04 - val_mse: 2.1079e-04 - val_mae: 0.0100
Epoch 56/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2740e-04 - mse: 2.2740e-04 - mae: 0.0105 - val_loss: 2.2037e-04 - val_mse: 2.2037e-04 - val_mae: 0.0105
Epoch 57/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3390e-04 - mse: 2.3390e-04 - mae: 0.0107 - val_loss: 2.1205e-04 - val_mse: 2.1205e-04 - val_mae: 0.0101
Epoch 58/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2720e-04 - mse: 2.2720e-04 - mae: 0.0105 - val_loss: 2.3505e-04 - val_mse: 2.3505e-04 - val_mae: 0.0110
Epoch 59/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2881e-04 - mse: 2.2881e-04 - mae: 0.0105 - val_loss: 2.5607e-04 - val_mse: 2.5607e-04 - val_mae: 0.0117
Epoch 60/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.4542e-04 - mse: 2.4542e-04 - mae: 0.0110 - val_loss: 2.1063e-04 - val_mse: 2.1063e-04 - val_mae: 0.0100
Epoch 61/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3401e-04 - mse: 2.3401e-04 - mae: 0.0107 - val_loss: 2.1161e-04 - val_mse: 2.1161e-04 - val_mae: 0.0101
Epoch 62/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2783e-04 - mse: 2.2783e-04 - mae: 0.0106 - val_loss: 2.1257e-04 - val_mse: 2.1257e-04 - val_mae: 0.0103
Epoch 63/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3268e-04 - mse: 2.3268e-04 - mae: 0.0107 - val_loss: 2.2466e-04 - val_mse: 2.2466e-04 - val_mae: 0.0102
Epoch 64/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3230e-04 - mse: 2.3230e-04 - mae: 0.0106 - val_loss: 2.3005e-04 - val_mse: 2.3005e-04 - val_mae: 0.0106
Epoch 65/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3349e-04 - mse: 2.3349e-04 - mae: 0.0107 - val_loss: 2.3507e-04 - val_mse: 2.3507e-04 - val_mae: 0.0108
Epoch 66/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2779e-04 - mse: 2.2779e-04 - mae: 0.0105 - val_loss: 2.2840e-04 - val_mse: 2.2840e-04 - val_mae: 0.0109
Epoch 67/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3076e-04 - mse: 2.3076e-04 - mae: 0.0106 - val_loss: 2.2660e-04 - val_mse: 2.2660e-04 - val_mae: 0.0102
Epoch 68/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2296e-04 - mse: 2.2296e-04 - mae: 0.0104 - val_loss: 2.1655e-04 - val_mse: 2.1655e-04 - val_mae: 0.0102
Epoch 69/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2669e-04 - mse: 2.2669e-04 - mae: 0.0105 - val_loss: 2.2394e-04 - val_mse: 2.2394e-04 - val_mae: 0.0106
Epoch 70/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2288e-04 - mse: 2.2288e-04 - mae: 0.0103 - val_loss: 2.1837e-04 - val_mse: 2.1837e-04 - val_mae: 0.0103
Epoch 71/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2799e-04 - mse: 2.2799e-04 - mae: 0.0106 - val_loss: 2.2411e-04 - val_mse: 2.2411e-04 - val_mae: 0.0104
Epoch 72/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2502e-04 - mse: 2.2502e-04 - mae: 0.0104 - val_loss: 2.1508e-04 - val_mse: 2.1508e-04 - val_mae: 0.0101
Epoch 73/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2472e-04 - mse: 2.2472e-04 - mae: 0.0105 - val_loss: 2.1423e-04 - val_mse: 2.1423e-04 - val_mae: 0.0101
Epoch 74/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2945e-04 - mse: 2.2945e-04 - mae: 0.0106 - val_loss: 2.1657e-04 - val_mse: 2.1657e-04 - val_mae: 0.0102
Epoch 75/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2924e-04 - mse: 2.2924e-04 - mae: 0.0106 - val_loss: 2.1112e-04 - val_mse: 2.1112e-04 - val_mae: 0.0101
Epoch 76/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2521e-04 - mse: 2.2521e-04 - mae: 0.0104 - val_loss: 2.1927e-04 - val_mse: 2.1927e-04 - val_mae: 0.0103
Epoch 77/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2410e-04 - mse: 2.2410e-04 - mae: 0.0104 - val_loss: 2.3618e-04 - val_mse: 2.3618e-04 - val_mae: 0.0108
Epoch 78/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2925e-04 - mse: 2.2925e-04 - mae: 0.0106 - val_loss: 2.2073e-04 - val_mse: 2.2073e-04 - val_mae: 0.0102
Epoch 79/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2944e-04 - mse: 2.2944e-04 - mae: 0.0106 - val_loss: 2.1626e-04 - val_mse: 2.1626e-04 - val_mae: 0.0102
Epoch 80/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2546e-04 - mse: 2.2546e-04 - mae: 0.0104 - val_loss: 2.1086e-04 - val_mse: 2.1086e-04 - val_mae: 0.0100
Epoch 81/250
6000/6000 [==============================] - 0s 17us/sample - loss: 2.2317e-04 - mse: 2.2317e-04 - mae: 0.0104 - val_loss: 2.3119e-04 - val_mse: 2.3119e-04 - val_mae: 0.0109
Epoch 82/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2362e-04 - mse: 2.2362e-04 - mae: 0.0105 - val_loss: 2.1420e-04 - val_mse: 2.1420e-04 - val_mae: 0.0103
Epoch 83/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3377e-04 - mse: 2.3377e-04 - mae: 0.0107 - val_loss: 2.1886e-04 - val_mse: 2.1886e-04 - val_mae: 0.0103
Epoch 84/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3773e-04 - mse: 2.3773e-04 - mae: 0.0107 - val_loss: 2.1106e-04 - val_mse: 2.1106e-04 - val_mae: 0.0100
Epoch 85/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2275e-04 - mse: 2.2275e-04 - mae: 0.0104 - val_loss: 2.3912e-04 - val_mse: 2.3912e-04 - val_mae: 0.0107
Epoch 86/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2593e-04 - mse: 2.2593e-04 - mae: 0.0105 - val_loss: 2.1998e-04 - val_mse: 2.1998e-04 - val_mae: 0.0101
Epoch 87/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2365e-04 - mse: 2.2365e-04 - mae: 0.0104 - val_loss: 2.1399e-04 - val_mse: 2.1399e-04 - val_mae: 0.0101
Epoch 88/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2682e-04 - mse: 2.2682e-04 - mae: 0.0105 - val_loss: 2.2181e-04 - val_mse: 2.2181e-04 - val_mae: 0.0104
Epoch 89/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3447e-04 - mse: 2.3447e-04 - mae: 0.0107 - val_loss: 2.3775e-04 - val_mse: 2.3775e-04 - val_mae: 0.0109
Epoch 90/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2804e-04 - mse: 2.2804e-04 - mae: 0.0105 - val_loss: 2.1185e-04 - val_mse: 2.1185e-04 - val_mae: 0.0101
Epoch 91/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2559e-04 - mse: 2.2559e-04 - mae: 0.0104 - val_loss: 2.2597e-04 - val_mse: 2.2597e-04 - val_mae: 0.0104
Epoch 92/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2233e-04 - mse: 2.2233e-04 - mae: 0.0104 - val_loss: 2.1729e-04 - val_mse: 2.1729e-04 - val_mae: 0.0105
Epoch 93/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2597e-04 - mse: 2.2597e-04 - mae: 0.0104 - val_loss: 2.1106e-04 - val_mse: 2.1106e-04 - val_mae: 0.0100
Epoch 94/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2538e-04 - mse: 2.2538e-04 - mae: 0.0104 - val_loss: 2.1190e-04 - val_mse: 2.1190e-04 - val_mae: 0.0101
Epoch 95/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2601e-04 - mse: 2.2601e-04 - mae: 0.0105 - val_loss: 2.1380e-04 - val_mse: 2.1380e-04 - val_mae: 0.0101
Epoch 96/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2291e-04 - mse: 2.2291e-04 - mae: 0.0104 - val_loss: 2.5420e-04 - val_mse: 2.5420e-04 - val_mae: 0.0118
Epoch 97/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2413e-04 - mse: 2.2413e-04 - mae: 0.0104 - val_loss: 2.1062e-04 - val_mse: 2.1062e-04 - val_mae: 0.0100
Epoch 98/250
6000/6000 [==============================] - 0s 20us/sample - loss: 2.2851e-04 - mse: 2.2851e-04 - mae: 0.0105 - val_loss: 2.5732e-04 - val_mse: 2.5732e-04 - val_mae: 0.0112
Epoch 99/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2359e-04 - mse: 2.2359e-04 - mae: 0.0104 - val_loss: 2.6497e-04 - val_mse: 2.6497e-04 - val_mae: 0.0115
Epoch 100/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2925e-04 - mse: 2.2925e-04 - mae: 0.0106 - val_loss: 2.1228e-04 - val_mse: 2.1228e-04 - val_mae: 0.0101
Epoch 101/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2806e-04 - mse: 2.2806e-04 - mae: 0.0105 - val_loss: 2.3385e-04 - val_mse: 2.3385e-04 - val_mae: 0.0106
Epoch 102/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3160e-04 - mse: 2.3160e-04 - mae: 0.0107 - val_loss: 2.3355e-04 - val_mse: 2.3355e-04 - val_mae: 0.0108
Epoch 103/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3345e-04 - mse: 2.3345e-04 - mae: 0.0107 - val_loss: 2.4238e-04 - val_mse: 2.4238e-04 - val_mae: 0.0110
Epoch 104/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2217e-04 - mse: 2.2217e-04 - mae: 0.0103 - val_loss: 2.1074e-04 - val_mse: 2.1074e-04 - val_mae: 0.0100
Epoch 105/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2616e-04 - mse: 2.2616e-04 - mae: 0.0105 - val_loss: 2.1168e-04 - val_mse: 2.1168e-04 - val_mae: 0.0100
Epoch 106/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2856e-04 - mse: 2.2856e-04 - mae: 0.0106 - val_loss: 2.4593e-04 - val_mse: 2.4593e-04 - val_mae: 0.0107
Epoch 107/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2403e-04 - mse: 2.2403e-04 - mae: 0.0104 - val_loss: 2.1940e-04 - val_mse: 2.1940e-04 - val_mae: 0.0105
Epoch 108/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2678e-04 - mse: 2.2678e-04 - mae: 0.0104 - val_loss: 2.1352e-04 - val_mse: 2.1352e-04 - val_mae: 0.0100
Epoch 109/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2743e-04 - mse: 2.2743e-04 - mae: 0.0104 - val_loss: 2.1738e-04 - val_mse: 2.1738e-04 - val_mae: 0.0102
Epoch 110/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2824e-04 - mse: 2.2824e-04 - mae: 0.0105 - val_loss: 2.3464e-04 - val_mse: 2.3464e-04 - val_mae: 0.0107
Epoch 111/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3822e-04 - mse: 2.3822e-04 - mae: 0.0108 - val_loss: 2.1181e-04 - val_mse: 2.1181e-04 - val_mae: 0.0101
Epoch 112/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2401e-04 - mse: 2.2401e-04 - mae: 0.0104 - val_loss: 2.1631e-04 - val_mse: 2.1631e-04 - val_mae: 0.0102
Epoch 113/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2764e-04 - mse: 2.2764e-04 - mae: 0.0105 - val_loss: 2.1088e-04 - val_mse: 2.1088e-04 - val_mae: 0.0100
Epoch 114/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2287e-04 - mse: 2.2287e-04 - mae: 0.0104 - val_loss: 2.1066e-04 - val_mse: 2.1066e-04 - val_mae: 0.0100
Epoch 115/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3817e-04 - mse: 2.3817e-04 - mae: 0.0109 - val_loss: 2.1594e-04 - val_mse: 2.1594e-04 - val_mae: 0.0101
Epoch 116/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2358e-04 - mse: 2.2358e-04 - mae: 0.0104 - val_loss: 2.1067e-04 - val_mse: 2.1067e-04 - val_mae: 0.0100
Epoch 117/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2234e-04 - mse: 2.2234e-04 - mae: 0.0104 - val_loss: 2.1345e-04 - val_mse: 2.1345e-04 - val_mae: 0.0103
Epoch 118/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2746e-04 - mse: 2.2746e-04 - mae: 0.0105 - val_loss: 2.1370e-04 - val_mse: 2.1370e-04 - val_mae: 0.0101
Epoch 119/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2138e-04 - mse: 2.2138e-04 - mae: 0.0104 - val_loss: 2.1490e-04 - val_mse: 2.1490e-04 - val_mae: 0.0103
Epoch 120/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2318e-04 - mse: 2.2318e-04 - mae: 0.0103 - val_loss: 2.1326e-04 - val_mse: 2.1326e-04 - val_mae: 0.0101
Epoch 121/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3011e-04 - mse: 2.3011e-04 - mae: 0.0106 - val_loss: 2.1117e-04 - val_mse: 2.1117e-04 - val_mae: 0.0100
Epoch 122/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3164e-04 - mse: 2.3164e-04 - mae: 0.0106 - val_loss: 2.1247e-04 - val_mse: 2.1247e-04 - val_mae: 0.0102
Epoch 123/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2533e-04 - mse: 2.2533e-04 - mae: 0.0105 - val_loss: 2.3027e-04 - val_mse: 2.3027e-04 - val_mae: 0.0108
Epoch 124/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2649e-04 - mse: 2.2649e-04 - mae: 0.0106 - val_loss: 2.2043e-04 - val_mse: 2.2043e-04 - val_mae: 0.0104
Epoch 125/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2428e-04 - mse: 2.2428e-04 - mae: 0.0104 - val_loss: 2.1385e-04 - val_mse: 2.1385e-04 - val_mae: 0.0102
Epoch 126/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3022e-04 - mse: 2.3022e-04 - mae: 0.0106 - val_loss: 2.1997e-04 - val_mse: 2.1997e-04 - val_mae: 0.0107
Epoch 127/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2603e-04 - mse: 2.2603e-04 - mae: 0.0104 - val_loss: 2.1325e-04 - val_mse: 2.1325e-04 - val_mae: 0.0102
Epoch 128/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3002e-04 - mse: 2.3002e-04 - mae: 0.0107 - val_loss: 2.6253e-04 - val_mse: 2.6253e-04 - val_mae: 0.0117
Epoch 129/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2628e-04 - mse: 2.2628e-04 - mae: 0.0105 - val_loss: 2.3197e-04 - val_mse: 2.3197e-04 - val_mae: 0.0106
Epoch 130/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2897e-04 - mse: 2.2897e-04 - mae: 0.0106 - val_loss: 2.1486e-04 - val_mse: 2.1486e-04 - val_mae: 0.0101
Epoch 131/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2332e-04 - mse: 2.2332e-04 - mae: 0.0104 - val_loss: 2.2886e-04 - val_mse: 2.2886e-04 - val_mae: 0.0104
Epoch 132/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2401e-04 - mse: 2.2401e-04 - mae: 0.0104 - val_loss: 2.1061e-04 - val_mse: 2.1061e-04 - val_mae: 0.0100
Epoch 133/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2419e-04 - mse: 2.2418e-04 - mae: 0.0104 - val_loss: 2.4834e-04 - val_mse: 2.4834e-04 - val_mae: 0.0108
Epoch 134/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3055e-04 - mse: 2.3055e-04 - mae: 0.0106 - val_loss: 2.1846e-04 - val_mse: 2.1846e-04 - val_mae: 0.0102
Epoch 135/250
6000/6000 [==============================] - 0s 17us/sample - loss: 2.2426e-04 - mse: 2.2426e-04 - mae: 0.0104 - val_loss: 2.1233e-04 - val_mse: 2.1233e-04 - val_mae: 0.0101
Epoch 136/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2733e-04 - mse: 2.2733e-04 - mae: 0.0105 - val_loss: 2.1358e-04 - val_mse: 2.1358e-04 - val_mae: 0.0101
Epoch 137/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2840e-04 - mse: 2.2840e-04 - mae: 0.0105 - val_loss: 2.1033e-04 - val_mse: 2.1033e-04 - val_mae: 0.0100
Epoch 138/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2630e-04 - mse: 2.2630e-04 - mae: 0.0104 - val_loss: 2.1457e-04 - val_mse: 2.1457e-04 - val_mae: 0.0101
Epoch 139/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2401e-04 - mse: 2.2401e-04 - mae: 0.0104 - val_loss: 2.1428e-04 - val_mse: 2.1428e-04 - val_mae: 0.0102
Epoch 140/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2781e-04 - mse: 2.2781e-04 - mae: 0.0105 - val_loss: 2.2288e-04 - val_mse: 2.2288e-04 - val_mae: 0.0104
Epoch 141/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2298e-04 - mse: 2.2298e-04 - mae: 0.0104 - val_loss: 2.1274e-04 - val_mse: 2.1274e-04 - val_mae: 0.0100
Epoch 142/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2121e-04 - mse: 2.2121e-04 - mae: 0.0103 - val_loss: 2.5779e-04 - val_mse: 2.5779e-04 - val_mae: 0.0114
Epoch 143/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2383e-04 - mse: 2.2383e-04 - mae: 0.0104 - val_loss: 2.2417e-04 - val_mse: 2.2417e-04 - val_mae: 0.0106
Epoch 144/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2802e-04 - mse: 2.2802e-04 - mae: 0.0106 - val_loss: 2.1359e-04 - val_mse: 2.1359e-04 - val_mae: 0.0101
Epoch 145/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2636e-04 - mse: 2.2636e-04 - mae: 0.0105 - val_loss: 2.1896e-04 - val_mse: 2.1896e-04 - val_mae: 0.0101
Epoch 146/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2992e-04 - mse: 2.2992e-04 - mae: 0.0106 - val_loss: 2.6278e-04 - val_mse: 2.6278e-04 - val_mae: 0.0114
Epoch 147/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3061e-04 - mse: 2.3061e-04 - mae: 0.0106 - val_loss: 2.1094e-04 - val_mse: 2.1094e-04 - val_mae: 0.0100
Epoch 148/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3017e-04 - mse: 2.3017e-04 - mae: 0.0105 - val_loss: 2.5427e-04 - val_mse: 2.5427e-04 - val_mae: 0.0115
Epoch 149/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2753e-04 - mse: 2.2753e-04 - mae: 0.0105 - val_loss: 2.1743e-04 - val_mse: 2.1743e-04 - val_mae: 0.0103
Epoch 150/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2163e-04 - mse: 2.2163e-04 - mae: 0.0104 - val_loss: 2.1453e-04 - val_mse: 2.1453e-04 - val_mae: 0.0103
Epoch 151/250
6000/6000 [==============================] - 0s 17us/sample - loss: 2.2603e-04 - mse: 2.2603e-04 - mae: 0.0105 - val_loss: 2.1177e-04 - val_mse: 2.1177e-04 - val_mae: 0.0101
Epoch 152/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2928e-04 - mse: 2.2928e-04 - mae: 0.0105 - val_loss: 2.1154e-04 - val_mse: 2.1154e-04 - val_mae: 0.0100
Epoch 153/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2026e-04 - mse: 2.2026e-04 - mae: 0.0103 - val_loss: 2.1390e-04 - val_mse: 2.1390e-04 - val_mae: 0.0104
Epoch 154/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2540e-04 - mse: 2.2540e-04 - mae: 0.0104 - val_loss: 2.1834e-04 - val_mse: 2.1834e-04 - val_mae: 0.0102
Epoch 155/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2458e-04 - mse: 2.2458e-04 - mae: 0.0104 - val_loss: 2.4522e-04 - val_mse: 2.4522e-04 - val_mae: 0.0113
Epoch 156/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2377e-04 - mse: 2.2377e-04 - mae: 0.0104 - val_loss: 2.2300e-04 - val_mse: 2.2300e-04 - val_mae: 0.0104
Epoch 157/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3564e-04 - mse: 2.3564e-04 - mae: 0.0108 - val_loss: 2.1466e-04 - val_mse: 2.1466e-04 - val_mae: 0.0103
Epoch 158/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2494e-04 - mse: 2.2494e-04 - mae: 0.0104 - val_loss: 2.1053e-04 - val_mse: 2.1053e-04 - val_mae: 0.0100
Epoch 159/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3233e-04 - mse: 2.3233e-04 - mae: 0.0106 - val_loss: 2.1328e-04 - val_mse: 2.1328e-04 - val_mae: 0.0103
Epoch 160/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3104e-04 - mse: 2.3104e-04 - mae: 0.0107 - val_loss: 2.3375e-04 - val_mse: 2.3375e-04 - val_mae: 0.0108
Epoch 161/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3631e-04 - mse: 2.3631e-04 - mae: 0.0107 - val_loss: 2.1280e-04 - val_mse: 2.1280e-04 - val_mae: 0.0100
Epoch 162/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2210e-04 - mse: 2.2210e-04 - mae: 0.0103 - val_loss: 2.5190e-04 - val_mse: 2.5190e-04 - val_mae: 0.0115
Epoch 163/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2690e-04 - mse: 2.2690e-04 - mae: 0.0105 - val_loss: 2.1442e-04 - val_mse: 2.1442e-04 - val_mae: 0.0103
Epoch 164/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2400e-04 - mse: 2.2400e-04 - mae: 0.0104 - val_loss: 2.3184e-04 - val_mse: 2.3184e-04 - val_mae: 0.0104
Epoch 165/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2548e-04 - mse: 2.2548e-04 - mae: 0.0105 - val_loss: 2.1244e-04 - val_mse: 2.1244e-04 - val_mae: 0.0100
Epoch 166/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3073e-04 - mse: 2.3073e-04 - mae: 0.0106 - val_loss: 2.2155e-04 - val_mse: 2.2155e-04 - val_mae: 0.0103
Epoch 167/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2526e-04 - mse: 2.2526e-04 - mae: 0.0105 - val_loss: 2.2884e-04 - val_mse: 2.2884e-04 - val_mae: 0.0105
Epoch 168/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2466e-04 - mse: 2.2466e-04 - mae: 0.0104 - val_loss: 2.1873e-04 - val_mse: 2.1873e-04 - val_mae: 0.0105
Epoch 169/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2485e-04 - mse: 2.2485e-04 - mae: 0.0104 - val_loss: 2.1316e-04 - val_mse: 2.1316e-04 - val_mae: 0.0102
Epoch 170/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2686e-04 - mse: 2.2686e-04 - mae: 0.0105 - val_loss: 2.1168e-04 - val_mse: 2.1168e-04 - val_mae: 0.0101
Epoch 171/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2773e-04 - mse: 2.2773e-04 - mae: 0.0105 - val_loss: 2.1864e-04 - val_mse: 2.1864e-04 - val_mae: 0.0103
Epoch 172/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2352e-04 - mse: 2.2352e-04 - mae: 0.0104 - val_loss: 2.1696e-04 - val_mse: 2.1696e-04 - val_mae: 0.0102
Epoch 173/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2454e-04 - mse: 2.2454e-04 - mae: 0.0104 - val_loss: 2.3519e-04 - val_mse: 2.3519e-04 - val_mae: 0.0110
Epoch 174/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2657e-04 - mse: 2.2657e-04 - mae: 0.0104 - val_loss: 2.1438e-04 - val_mse: 2.1438e-04 - val_mae: 0.0101
Epoch 175/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2569e-04 - mse: 2.2569e-04 - mae: 0.0104 - val_loss: 2.5655e-04 - val_mse: 2.5655e-04 - val_mae: 0.0115
Epoch 176/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2406e-04 - mse: 2.2406e-04 - mae: 0.0104 - val_loss: 2.1398e-04 - val_mse: 2.1398e-04 - val_mae: 0.0101
Epoch 177/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3029e-04 - mse: 2.3029e-04 - mae: 0.0106 - val_loss: 2.2517e-04 - val_mse: 2.2517e-04 - val_mae: 0.0105
Epoch 178/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2498e-04 - mse: 2.2498e-04 - mae: 0.0104 - val_loss: 2.1440e-04 - val_mse: 2.1440e-04 - val_mae: 0.0103
Epoch 179/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3806e-04 - mse: 2.3806e-04 - mae: 0.0108 - val_loss: 2.1230e-04 - val_mse: 2.1230e-04 - val_mae: 0.0100
Epoch 180/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2834e-04 - mse: 2.2834e-04 - mae: 0.0105 - val_loss: 2.9368e-04 - val_mse: 2.9368e-04 - val_mae: 0.0126
Epoch 181/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2773e-04 - mse: 2.2773e-04 - mae: 0.0105 - val_loss: 2.1492e-04 - val_mse: 2.1492e-04 - val_mae: 0.0101
Epoch 182/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2345e-04 - mse: 2.2345e-04 - mae: 0.0104 - val_loss: 2.1835e-04 - val_mse: 2.1835e-04 - val_mae: 0.0107
Epoch 183/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2653e-04 - mse: 2.2653e-04 - mae: 0.0105 - val_loss: 2.1100e-04 - val_mse: 2.1100e-04 - val_mae: 0.0100
Epoch 184/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2592e-04 - mse: 2.2592e-04 - mae: 0.0105 - val_loss: 2.2343e-04 - val_mse: 2.2343e-04 - val_mae: 0.0103
Epoch 185/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2320e-04 - mse: 2.2320e-04 - mae: 0.0104 - val_loss: 2.1345e-04 - val_mse: 2.1345e-04 - val_mae: 0.0101
Epoch 186/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2697e-04 - mse: 2.2697e-04 - mae: 0.0106 - val_loss: 2.1414e-04 - val_mse: 2.1414e-04 - val_mae: 0.0102
Epoch 187/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3903e-04 - mse: 2.3903e-04 - mae: 0.0109 - val_loss: 2.1738e-04 - val_mse: 2.1738e-04 - val_mae: 0.0102
Epoch 188/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3059e-04 - mse: 2.3059e-04 - mae: 0.0107 - val_loss: 2.1739e-04 - val_mse: 2.1739e-04 - val_mae: 0.0103
Epoch 189/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2253e-04 - mse: 2.2253e-04 - mae: 0.0104 - val_loss: 2.2230e-04 - val_mse: 2.2230e-04 - val_mae: 0.0103
Epoch 190/250
6000/6000 [==============================] - 0s 17us/sample - loss: 2.2905e-04 - mse: 2.2905e-04 - mae: 0.0105 - val_loss: 2.1252e-04 - val_mse: 2.1252e-04 - val_mae: 0.0100
Epoch 191/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2292e-04 - mse: 2.2292e-04 - mae: 0.0104 - val_loss: 2.1257e-04 - val_mse: 2.1257e-04 - val_mae: 0.0100
Epoch 192/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2126e-04 - mse: 2.2126e-04 - mae: 0.0103 - val_loss: 2.1621e-04 - val_mse: 2.1621e-04 - val_mae: 0.0102
Epoch 193/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2333e-04 - mse: 2.2333e-04 - mae: 0.0104 - val_loss: 2.1448e-04 - val_mse: 2.1448e-04 - val_mae: 0.0101
Epoch 194/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.1998e-04 - mse: 2.1998e-04 - mae: 0.0103 - val_loss: 2.2254e-04 - val_mse: 2.2254e-04 - val_mae: 0.0105
Epoch 195/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2488e-04 - mse: 2.2488e-04 - mae: 0.0105 - val_loss: 2.1814e-04 - val_mse: 2.1814e-04 - val_mae: 0.0101
Epoch 196/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2671e-04 - mse: 2.2671e-04 - mae: 0.0105 - val_loss: 2.1570e-04 - val_mse: 2.1570e-04 - val_mae: 0.0103
Epoch 197/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2220e-04 - mse: 2.2220e-04 - mae: 0.0103 - val_loss: 2.1063e-04 - val_mse: 2.1063e-04 - val_mae: 0.0100
Epoch 198/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2371e-04 - mse: 2.2371e-04 - mae: 0.0104 - val_loss: 2.2403e-04 - val_mse: 2.2403e-04 - val_mae: 0.0106
Epoch 199/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3061e-04 - mse: 2.3061e-04 - mae: 0.0106 - val_loss: 2.1795e-04 - val_mse: 2.1795e-04 - val_mae: 0.0102
Epoch 200/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2452e-04 - mse: 2.2452e-04 - mae: 0.0105 - val_loss: 2.1159e-04 - val_mse: 2.1159e-04 - val_mae: 0.0101
Epoch 201/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2817e-04 - mse: 2.2817e-04 - mae: 0.0106 - val_loss: 2.3211e-04 - val_mse: 2.3211e-04 - val_mae: 0.0105
Epoch 202/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2539e-04 - mse: 2.2539e-04 - mae: 0.0105 - val_loss: 2.1912e-04 - val_mse: 2.1912e-04 - val_mae: 0.0105
Epoch 203/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2698e-04 - mse: 2.2698e-04 - mae: 0.0105 - val_loss: 2.3503e-04 - val_mse: 2.3503e-04 - val_mae: 0.0110
Epoch 204/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2509e-04 - mse: 2.2509e-04 - mae: 0.0104 - val_loss: 2.1563e-04 - val_mse: 2.1563e-04 - val_mae: 0.0103
Epoch 205/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2002e-04 - mse: 2.2002e-04 - mae: 0.0103 - val_loss: 2.2897e-04 - val_mse: 2.2897e-04 - val_mae: 0.0105
Epoch 206/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3049e-04 - mse: 2.3049e-04 - mae: 0.0106 - val_loss: 2.1292e-04 - val_mse: 2.1292e-04 - val_mae: 0.0101
Epoch 207/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2669e-04 - mse: 2.2669e-04 - mae: 0.0105 - val_loss: 2.1491e-04 - val_mse: 2.1491e-04 - val_mae: 0.0101
Epoch 208/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2717e-04 - mse: 2.2717e-04 - mae: 0.0105 - val_loss: 2.1351e-04 - val_mse: 2.1351e-04 - val_mae: 0.0102
Epoch 209/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2432e-04 - mse: 2.2432e-04 - mae: 0.0104 - val_loss: 2.1774e-04 - val_mse: 2.1774e-04 - val_mae: 0.0101
Epoch 210/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2541e-04 - mse: 2.2541e-04 - mae: 0.0105 - val_loss: 2.1106e-04 - val_mse: 2.1106e-04 - val_mae: 0.0100
Epoch 211/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2888e-04 - mse: 2.2888e-04 - mae: 0.0106 - val_loss: 2.4993e-04 - val_mse: 2.4993e-04 - val_mae: 0.0113
Epoch 212/250
6000/6000 [==============================] - 0s 20us/sample - loss: 2.2688e-04 - mse: 2.2688e-04 - mae: 0.0105 - val_loss: 2.1945e-04 - val_mse: 2.1945e-04 - val_mae: 0.0104
Epoch 213/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3403e-04 - mse: 2.3403e-04 - mae: 0.0107 - val_loss: 2.1275e-04 - val_mse: 2.1275e-04 - val_mae: 0.0102
Epoch 214/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2605e-04 - mse: 2.2605e-04 - mae: 0.0105 - val_loss: 2.4611e-04 - val_mse: 2.4611e-04 - val_mae: 0.0109
Epoch 215/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2553e-04 - mse: 2.2553e-04 - mae: 0.0105 - val_loss: 2.1662e-04 - val_mse: 2.1662e-04 - val_mae: 0.0101
Epoch 216/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2354e-04 - mse: 2.2354e-04 - mae: 0.0104 - val_loss: 2.1366e-04 - val_mse: 2.1366e-04 - val_mae: 0.0101
Epoch 217/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2621e-04 - mse: 2.2621e-04 - mae: 0.0105 - val_loss: 2.1124e-04 - val_mse: 2.1124e-04 - val_mae: 0.0101
Epoch 218/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2086e-04 - mse: 2.2086e-04 - mae: 0.0103 - val_loss: 2.2604e-04 - val_mse: 2.2604e-04 - val_mae: 0.0103
Epoch 219/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3641e-04 - mse: 2.3641e-04 - mae: 0.0108 - val_loss: 2.3343e-04 - val_mse: 2.3343e-04 - val_mae: 0.0106
Epoch 220/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2614e-04 - mse: 2.2614e-04 - mae: 0.0105 - val_loss: 2.1800e-04 - val_mse: 2.1800e-04 - val_mae: 0.0107
Epoch 221/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2668e-04 - mse: 2.2668e-04 - mae: 0.0105 - val_loss: 2.3521e-04 - val_mse: 2.3521e-04 - val_mae: 0.0108
Epoch 222/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.3500e-04 - mse: 2.3500e-04 - mae: 0.0107 - val_loss: 2.3394e-04 - val_mse: 2.3394e-04 - val_mae: 0.0109
Epoch 223/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3591e-04 - mse: 2.3591e-04 - mae: 0.0108 - val_loss: 2.1145e-04 - val_mse: 2.1145e-04 - val_mae: 0.0100
Epoch 224/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2471e-04 - mse: 2.2471e-04 - mae: 0.0105 - val_loss: 2.1110e-04 - val_mse: 2.1110e-04 - val_mae: 0.0100
Epoch 225/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2331e-04 - mse: 2.2331e-04 - mae: 0.0104 - val_loss: 2.3164e-04 - val_mse: 2.3164e-04 - val_mae: 0.0111
Epoch 226/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3459e-04 - mse: 2.3459e-04 - mae: 0.0108 - val_loss: 2.1069e-04 - val_mse: 2.1069e-04 - val_mae: 0.0100
Epoch 227/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2631e-04 - mse: 2.2631e-04 - mae: 0.0105 - val_loss: 2.1558e-04 - val_mse: 2.1558e-04 - val_mae: 0.0100
Epoch 228/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2482e-04 - mse: 2.2482e-04 - mae: 0.0104 - val_loss: 2.1355e-04 - val_mse: 2.1355e-04 - val_mae: 0.0102
Epoch 229/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2375e-04 - mse: 2.2375e-04 - mae: 0.0104 - val_loss: 2.5218e-04 - val_mse: 2.5218e-04 - val_mae: 0.0109
Epoch 230/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2679e-04 - mse: 2.2679e-04 - mae: 0.0105 - val_loss: 2.1168e-04 - val_mse: 2.1168e-04 - val_mae: 0.0100
Epoch 231/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2161e-04 - mse: 2.2161e-04 - mae: 0.0104 - val_loss: 2.1174e-04 - val_mse: 2.1174e-04 - val_mae: 0.0101
Epoch 232/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2314e-04 - mse: 2.2314e-04 - mae: 0.0104 - val_loss: 2.4757e-04 - val_mse: 2.4757e-04 - val_mae: 0.0109
Epoch 233/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.3201e-04 - mse: 2.3201e-04 - mae: 0.0106 - val_loss: 2.2207e-04 - val_mse: 2.2207e-04 - val_mae: 0.0103
Epoch 234/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2543e-04 - mse: 2.2543e-04 - mae: 0.0105 - val_loss: 2.1147e-04 - val_mse: 2.1147e-04 - val_mae: 0.0100
Epoch 235/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2344e-04 - mse: 2.2344e-04 - mae: 0.0104 - val_loss: 2.1419e-04 - val_mse: 2.1419e-04 - val_mae: 0.0101
Epoch 236/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2365e-04 - mse: 2.2365e-04 - mae: 0.0104 - val_loss: 2.5838e-04 - val_mse: 2.5838e-04 - val_mae: 0.0116
Epoch 237/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2404e-04 - mse: 2.2404e-04 - mae: 0.0104 - val_loss: 2.1413e-04 - val_mse: 2.1413e-04 - val_mae: 0.0101
Epoch 238/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2830e-04 - mse: 2.2830e-04 - mae: 0.0105 - val_loss: 2.1688e-04 - val_mse: 2.1688e-04 - val_mae: 0.0104
Epoch 239/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2829e-04 - mse: 2.2829e-04 - mae: 0.0106 - val_loss: 2.1381e-04 - val_mse: 2.1381e-04 - val_mae: 0.0102
Epoch 240/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2482e-04 - mse: 2.2482e-04 - mae: 0.0104 - val_loss: 2.1196e-04 - val_mse: 2.1196e-04 - val_mae: 0.0101
Epoch 241/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2304e-04 - mse: 2.2304e-04 - mae: 0.0104 - val_loss: 2.1886e-04 - val_mse: 2.1886e-04 - val_mae: 0.0103
Epoch 242/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2486e-04 - mse: 2.2486e-04 - mae: 0.0105 - val_loss: 2.3223e-04 - val_mse: 2.3223e-04 - val_mae: 0.0112
Epoch 243/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2942e-04 - mse: 2.2942e-04 - mae: 0.0106 - val_loss: 2.2098e-04 - val_mse: 2.2098e-04 - val_mae: 0.0103
Epoch 244/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2520e-04 - mse: 2.2520e-04 - mae: 0.0105 - val_loss: 2.1814e-04 - val_mse: 2.1814e-04 - val_mae: 0.0103
Epoch 245/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2200e-04 - mse: 2.2200e-04 - mae: 0.0103 - val_loss: 2.1868e-04 - val_mse: 2.1868e-04 - val_mae: 0.0102
Epoch 246/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2493e-04 - mse: 2.2493e-04 - mae: 0.0104 - val_loss: 2.1792e-04 - val_mse: 2.1792e-04 - val_mae: 0.0102
Epoch 247/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2722e-04 - mse: 2.2722e-04 - mae: 0.0105 - val_loss: 2.2106e-04 - val_mse: 2.2106e-04 - val_mae: 0.0105
Epoch 248/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2954e-04 - mse: 2.2954e-04 - mae: 0.0106 - val_loss: 2.1213e-04 - val_mse: 2.1213e-04 - val_mae: 0.0102
Epoch 249/250
6000/6000 [==============================] - 0s 19us/sample - loss: 2.2504e-04 - mse: 2.2504e-04 - mae: 0.0104 - val_loss: 2.1073e-04 - val_mse: 2.1073e-04 - val_mae: 0.0100
Epoch 250/250
6000/6000 [==============================] - 0s 18us/sample - loss: 2.2291e-04 - mse: 2.2291e-04 - mae: 0.0105 - val_loss: 2.1524e-04 - val_mse: 2.1524e-04 - val_mae: 0.0101

Plot the Loss Test

In [25]:
fig = go.Figure()
fig.add_trace(go.Scatter(y=history.history['loss'],
                    mode='lines',
                    name='Train'))
fig.add_trace(go.Scatter(y=history.history['val_loss'],
                    mode='lines+markers',
                    name='Validation'))
fig.update_layout(
    autosize=False,
    width=1500,
    height=750,
    title = "Train vs. Validation Loss Test",
    xaxis=dict(
        title_text="No. of epochs",
        titlefont=dict(size=20),
    ),


    yaxis=dict(
        title_text="Loss Value",
        titlefont=dict(size=20),
    )

)

fig.show()

Predict Values

Create a new array with dummy data and test the model's effeftiveness against it

In [26]:
predict_full = init_array(25000)
valid_df = predict_full.iloc[:,:-1]
valid_df.tail(100)
Out[26]:
yr+3_forecast yr+2_forecast yr+1_forecast yr0_plan yr-1_approp
24900 10263133 10694460.21 10555266.29 10628706.27 9306840.00
24901 15306374 13977275.71 11544916.77 14574419.88 13305132.28
24902 13227973 9203546.36 8030113.15 9720114.49 5971394.17
24903 20214368 23315784.23 20072880.43 18394861.16 20000290.80
24904 18235171 19087484.55 21293489.78 21766875.29 13861136.95
... ... ... ... ... ...
24995 9788547 6019195.88 6621337.92 6614968.31 4550778.85
24996 16761939 11540837.92 10632601.71 8565822.45 5608820.13
24997 29827735 36874254.12 36124075.74 42167021.18 43979117.69
24998 16921721 16218709.31 13533193.47 10013329.68 8468636.24
24999 23722133 14329118.18 14532372.60 14002597.92 8792018.72

100 rows × 5 columns

Convert the dataframe to a two dimensional numpy array

In [27]:
valid_array = valid_df.to_numpy()
valid_array
Out[27]:
array([[ 4965740.  ,  3487684.27,  4014235.53,  4725871.01,  5234624.84],
       [18101446.  , 19994604.3 , 23624533.34, 22314899.04, 17000731.86],
       [ 7845234.  ,  5754059.4 ,  5290432.94,  4052856.29,  2769045.32],
       ...,
       [29827735.  , 36874254.12, 36124075.74, 42167021.18, 43979117.69],
       [16921721.  , 16218709.31, 13533193.47, 10013329.68,  8468636.24],
       [23722133.  , 14329118.18, 14532372.6 , 14002597.92,  8792018.72]])

Validate the shape of the newly created array

In [28]:
valid_array.shape
Out[28]:
(25000, 5)

Predict all response values for each of the 25,000 records

In [29]:
predict_val = model.predict(valid_array)
predict_val
Out[29]:
array([[ 4470589.5],
       [15786764. ],
       [ 2593616.8],
       ...,
       [37625132. ],
       [ 7903730. ],
       [ 8205492. ]], dtype=float32)

Merge the numpy predictor array as a standalone column to the predict_full dataframe

In [30]:
predict_full['predict_values'] = predict_val
predict_full.tail(10)
Out[30]:
yr+3_forecast yr+2_forecast yr+1_forecast yr0_plan yr-1_approp yr-2_oblig predict_values
24990 8666512 7461156.43 8822194.01 9255574.82 8259746.18 7141320.50 7424556.0
24991 24778892 13605874.65 10933862.15 7503179.29 6512696.27 5535517.31 6108544.0
24992 11527898 10119517.86 10636454.97 9449040.98 9744862.91 8841741.56 8883496.0
24993 4823458 4248480.93 4070352.40 3602277.70 3395571.19 2768099.78 3160810.5
24994 26754281 17794607.35 17997860.27 10823436.31 6961458.78 6777291.92 6659742.0
24995 9788547 6019195.88 6621337.92 6614968.31 4550778.85 4380361.39 4239308.5
24996 16761939 11540837.92 10632601.71 8565822.45 5608820.13 4736385.57 5273779.0
24997 29827735 36874254.12 36124075.74 42167021.18 43979117.69 41464279.90 37625132.0
24998 16921721 16218709.31 13533193.47 10013329.68 8468636.24 7779268.10 7903730.0
24999 23722133 14329118.18 14532372.60 14002597.92 8792018.72 8034307.29 8205492.0

Calculate the difference in actual ('yr0_exe') and predicted ('predict_values') model values and assign a difference value for each record in the 'delta' column

In [31]:
predict_full['delta'] = (predict_full['yr-2_oblig'] - predict_full['predict_values']) / predict_full['yr-2_oblig']
predict_full.tail(10)
Out[31]:
yr+3_forecast yr+2_forecast yr+1_forecast yr0_plan yr-1_approp yr-2_oblig predict_values delta
24990 8666512 7461156.43 8822194.01 9255574.82 8259746.18 7141320.50 7424556.0 -0.039662
24991 24778892 13605874.65 10933862.15 7503179.29 6512696.27 5535517.31 6108544.0 -0.103518
24992 11527898 10119517.86 10636454.97 9449040.98 9744862.91 8841741.56 8883496.0 -0.004722
24993 4823458 4248480.93 4070352.40 3602277.70 3395571.19 2768099.78 3160810.5 -0.141870
24994 26754281 17794607.35 17997860.27 10823436.31 6961458.78 6777291.92 6659742.0 0.017345
24995 9788547 6019195.88 6621337.92 6614968.31 4550778.85 4380361.39 4239308.5 0.032201
24996 16761939 11540837.92 10632601.71 8565822.45 5608820.13 4736385.57 5273779.0 -0.113461
24997 29827735 36874254.12 36124075.74 42167021.18 43979117.69 41464279.90 37625132.0 0.092589
24998 16921721 16218709.31 13533193.47 10013329.68 8468636.24 7779268.10 7903730.0 -0.015999
24999 23722133 14329118.18 14532372.60 14002597.92 8792018.72 8034307.29 8205492.0 -0.021307

Display the histogram of the yr-2_oblig response values

In [32]:
fig = px.histogram(predict_full, x="yr-2_oblig",marginal="rug", # can be `box`, `violin`
                        hover_data=predict_full.columns, color_discrete_sequence=['orange'], opacity = 0.5)
fig.update_layout(
    autosize=True,
    title = "yr-2_oblig (actuals) distribution ")
fig.show()

Display the histogram of the predict_values response values

In [33]:
fig = px.histogram(predict_full, x="predict_values",marginal="rug", # can be `box`, `violin`
                        hover_data=predict_full.columns, color_discrete_sequence=['teal'], opacity = 0.5)
fig.update_layout(
    autosize=True,
    title = "predict values distribution ")
fig.show()

Plot the distribution of the delta percentage values

In [34]:
fig = px.histogram(predict_full, x="delta",marginal="rug", # can be `box`, `violin`
                        hover_data=predict_full.columns, color_discrete_sequence=['indianred'], opacity = 0.5)
fig.update_layout(
    autosize=True,
    title = "Actual vs Prediction value Historgram ")
fig.show()

Display distribution of box & whisker plot for response and predict values

In [35]:
yr5 = dataset['yr-2_oblig']
yr6 = predict_full.predict_values

fig = go.Figure()
fig.add_trace(go.Box(x=yr5, name = "yr-2_oblig (actuals)"))
fig.add_trace(go.Box(x=yr6, name = "predict_values"))
fig.show()

Plot the model evaluation

Retrieve the statistical parameters for the linear model

In [36]:
x = predict_full['yr-2_oblig']
y = predict_full['predict_values']
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
print(" Slope: {}\n Intercept: {}\n R-squared: {}\n P-Value: {}\n Standard Error: {}". format(slope, intercept, r_value, p_value, std_err))
 Slope: 0.9593598645379876
 Intercept: 467323.7050779704
 R-squared: 0.9923189716185943
 P-Value: 0.0
 Standard Error: 0.0007564265072980102

Plot the response values (original) against the predicted values

In [37]:
fig = px.scatter(predict_full, x="yr-2_oblig", y="predict_values", trendline="ols", opacity=0.25, color_discrete_sequence=['green'])
fig.update_layout(
    autosize=False,
    width=1000,
    height=750,
    title = "Response values vs predicted values scatterplot",
    xaxis=dict(
        title_text="yr-2_oblig values (Response Values)",
        titlefont=dict(size=20),
    ),


    yaxis=dict(
        title_text="predict_values (Predicted Values)",
        titlefont=dict(size=20),
    )

)
fig.show()

GEOPLOT

Import the Virginia shape file

In [66]:
us_shp = gpd.read_file('data/va_shp/tl_2016_51_cousub.shp')

Plot the shape file

In [67]:
fig, ax = plt.subplots(figsize = (30,30))
us_shp.plot(ax = ax)
Out[67]:
<matplotlib.axes._subplots.AxesSubplot at 0x1a3f956ad0>
In [68]:
df_size = len(predict_full)
df_size
Out[68]:
25000

Create randomized lat long values

In [69]:
# 38.893732, -77.311721
#36.762558, -78.359682
lat_random = np.random.uniform(low=36.762558, high=38.893732, size=df_size)
long_random = np.random.uniform(low=-77.311721, high=-78.359682, size=df_size)
In [70]:
len(long_random)
Out[70]:
25000
In [71]:
temp = predict_full.copy()
geo_df = temp[['yr-2_oblig']]
geo_df['latitude'] = lat_random
geo_df['longitude'] = long_random
geo_df.tail(10)
/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:


A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:


A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

Out[71]:
yr-2_oblig latitude longitude
24990 7141320.50 37.529193 -78.232538
24991 5535517.31 38.537775 -77.448704
24992 8841741.56 37.193523 -77.916613
24993 2768099.78 37.755412 -77.678313
24994 6777291.92 36.779786 -77.584900
24995 4380361.39 37.374526 -77.500882
24996 4736385.57 37.823102 -77.505574
24997 41464279.90 37.450702 -77.464164
24998 7779268.10 37.618579 -77.481662
24999 8034307.29 37.768450 -77.604193

Create a list of points

In [72]:
geometry = [Point(xy) for xy in zip(geo_df["longitude"],geo_df["latitude"])]
geometry[:3]
Out[72]:
[<shapely.geometry.point.Point at 0x1a3f9080d0>,
 <shapely.geometry.point.Point at 0x1a3f9526d0>,
 <shapely.geometry.point.Point at 0x1a3f952410>]
In [73]:
# define our coordinate system
crs = {'init': 'epsg:4326'}

Retrieve only the first 150 points

In [87]:
geo_slice_df = geo_df[:150]
geo_slice_df
Out[87]:
yr-2_oblig latitude longitude geometry
0 5098276.51 38.336702 -78.203386 POINT (-78.20339 38.33670)
1 15839218.15 37.007452 -78.073359 POINT (-78.07336 37.00745)
2 2665865.34 37.298109 -77.727721 POINT (-77.72772 37.29811)
3 5133929.85 37.548611 -77.329045 POINT (-77.32905 37.54861)
4 18954276.41 38.835523 -78.192346 POINT (-78.19235 38.83552)
... ... ... ... ...
145 16949652.00 38.236145 -77.894895 POINT (-77.89489 38.23614)
146 10081321.22 37.280374 -77.804701 POINT (-77.80470 37.28037)
147 6711616.07 37.175871 -77.904744 POINT (-77.90474 37.17587)
148 27806261.47 37.499048 -77.763873 POINT (-77.76387 37.49905)
149 11599766.56 37.893257 -77.583978 POINT (-77.58398 37.89326)

150 rows × 4 columns

Create the geo df object to convert lat/long to points

In [88]:
geo_slice = gpd.GeoDataFrame(geo_slice_df,
                         crs = crs)
geo_slice.to_csv (r'geo_df_export.csv', index = False, header=True)
geo_slice.head()
/opt/anaconda3/lib/python3.7/site-packages/pyproj/crs/crs.py:53: FutureWarning:

'+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6

Out[88]:
yr-2_oblig latitude longitude geometry
0 5098276.51 38.336702 -78.203386 POINT (-78.20339 38.33670)
1 15839218.15 37.007452 -78.073359 POINT (-78.07336 37.00745)
2 2665865.34 37.298109 -77.727721 POINT (-77.72772 37.29811)
3 5133929.85 37.548611 -77.329045 POINT (-77.32905 37.54861)
4 18954276.41 38.835523 -78.192346 POINT (-78.19235 38.83552)

Plot the figure

In [100]:
fig, ax = plt.subplots(figsize = (20,15))
us_shp.plot(ax = ax, alpha = 0.4, color = "grey")


geo_slice[geo_slice['yr-2_oblig'] > 30000000].plot(ax = ax, markersize = 220, color = "red", marker = "o", label = "Subprojects at Severe High Risk")
geo_slice[(geo_slice['yr-2_oblig'] >=10000000) & (geo_slice['yr-2_oblig'] <20000000)].plot(ax = ax, markersize = 40, color = "orange", marker = "^", label = "Subprojects at High Risk")
#geo_df[(geo_df['yr-2_oblig'] >=5000000) & (geo_df['yr-2_oblig'] <10000000)].plot(ax = ax, markersize = 20, color = "yellow", marker = "^", label = "$Subprojects at Medium Risk")
#geo_df[(geo_df['price'] >=100) & (geo_df['price'] <199)].plot(ax = ax, markersize = 20, color = "green", marker = "^", label = "$100 - $199")
#geo_df[geo_df['price'] < 100].plot(ax = ax, markersize = 20, color = "blue", marker = "^", label = "< $100")

plt.title("SUBPROJECTS AT RISK OF NOT MEETING OBLIGATION RATES")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.legend(prop={'size':12})
Out[100]:
<matplotlib.legend.Legend at 0x1a54d11250>
In [ ]:
fig = px.scatter(geo_df[:50], x="longitude", y="latitude", opacity=0.25, size = "predict_values", color_discrete_sequence=['green'])
fig.update_layout(
    autosize=False,
    width=1000,
    height=750,
    title = "US Map of Budget points",
    xaxis=dict(
        title_text="Longitude",
        titlefont=dict(size=20),
    ),


    yaxis=dict(
        title_text="Latitude",
        titlefont=dict(size=20),
    )

)
fig.show()

PROGRAM END

In [ ]:
program_end = t.time() - program_start
elapsed = round(program_end, 2)

print("Total time for program execution is {} seconds".format(elapsed))